
NVIDIA Networking Docs


Installation Requirements

Prior to the installation process, make sure that:

A supported version of Linux is installed on your machine, as listed below.

Python 2.7 is installed on the machine.

You have https/http access from your client machine (on which a browser is running) to the machine that you intend to run Mellanox NEO™ on. 

The default access protocol is https.

The ports listed below are not being used by another application running on the same machine/VM as NEO.

System Requirements

The platform and server requirements for Mellanox NEO are detailed in the following sections:

Mellanox NEO Server Requirements

Platform   Type and Version (Up to 20 Nodes)   Type and Version (Above 20 Nodes)
OS         RedHat/CentOS 7.4, 7.5, 7.6         RedHat/CentOS 7.4, 7.5, 7.6
CPU        8-core server and above             24-core server and above
RAM        16GB and above                      32GB and above
Disk       1G + 500MB per switch               1G + 500MB per switch

In order for IP Discovery to load, DNS should be configured properly on the installed machine, or the hostname should be defined in the /etc/hosts file.
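For example, a minimal /etc/hosts entry mapping the machine's hostname to its management IP (the IP address and hostname below are placeholders, not values from this guide):

192.168.1.10   neo-server.example.com   neo-server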

If a telemetry agent manages more than 50 switches, it must run on an SSD drive.

Ports Used by NEO Application

Ports                              Protocol   Description
8086                               TCP        InfluxDB
8088                               TCP        InfluxDB
2094                               TCP        Telegraf
7658                               TCP        NEO GRPC collector used for collection of buffer threshold events
5000                               TCP        Used as the web server gateway interface
5555, 5601-5620                    TCP        Used to communicate between NEO services
7654                               TCP        Used to communicate between the NEO telemetry agent and NEO
8701                               TCP        Inter-process communication
8090-8092, 8605-8608, 8095-8099    TCP        NEO services REST API
6306                               UDP        NEO device IP discovery
162                                UDP        SNMP traps receiver
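Before installing, you can verify that a given port is free on the machine; a minimal check using the ss utility shipped with RedHat/CentOS 7.x, with port 8086 as an example:

ss -tulpn | grep -w ':8086' || echo "port 8086 is free"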

Mellanox NEO GUI Client Requirements

Supported Browser    Browser Version
Internet Explorer    11 and above
Chrome               62 and above
Firefox              56 and above
Safari               11.0 and above

In order for the NEO GUI client to work properly, Flash Player should be installed and enabled in the client browser. For more information regarding Flash Player installation and enabling, refer to the following link: https://helpx.adobe.com/flash-player.html.

Recommended Screen Resolutions

Screen Type   Screen Size   Recommended Resolution
Desktop       23"           1920 x 1080
Laptop        15"           1366 x 768
Tablet        9.7"          1024 x 768

Supported Mellanox Managed Systems

System Type: Ethernet

Platform: Mellanox SN2000 Series
Devices: SN2010, SN2100, SN2100B, SN2410, SN2410B, SN2700, SN2700B, SN2740
Software Version: Mellanox Onyx v3.8.2004 or newer; Cumulus Linux 3.6.2 or newer

Platform: HPE M-Series
Devices: SN2100M, SN2410M, SN2410bM, SN2700M
Software Version: Mellanox Onyx v3.8.2004 or newer

Platform: Edgecore
Devices: AS4610
Software Version: Cumulus Linux 3.2 or newer

For Cumulus switches, it is recommended to run the Initial-Discovery-Setting template to allow NEO to fully discover the switch information.

Supported Platforms and Operating Systems

Platform: Bare metal server
Operating System: RedHat/CentOS 7.x

Platform: Virtualized Environment

Linux virtualization: RedHat/CentOS 7.x

Microsoft Hyper-V virtualization: Windows Server 2008 R2, Windows Server 2012, Windows Client 10, Windows Server 2016

VMware virtualization: VMware Workstation 15.1.0, ESXi 6.7.0

Oracle VirtualBox: 6.0.8

Supported 3rd Party Managed Switch Systems

The following 3rd party managed switch systems and software are supported by Mellanox NEO:

Arista DCS-7050S1
Cisco Nexus 3064
Cisco 2960 (without provisioning)
Juniper QFX3500
HPE 5900AF
Brocade VDX6740

For more information about Mellanox NEO 3rd party managed systems, please refer to Mellanox NEO Solutions Community page, and select the current release's Plugins page.

Managed Hosts Supported by Mellanox NEO

Linux

Windows

Downloading Mellanox NEO

Using the MyMellanox Account

If you do not have an active support contract, skip the below steps, and follow the next procedure instead.

To download Mellanox NEO software:

Log into MyMellanox.

Go to Software --> Management Software --> Mellanox NEO.

Click the “Downloads” tab and click the software image.

Click “Download”.

From the Mellanox Website

If you have a valid support contract, follow the previous procedure instead.

Go to the Mellanox NEO product page on the Mellanox website.

Click the “Download Software” button.

Fill in the short form and click “Submit”.

A direct link to the image download will be sent to the email address you have specified in the form.

Installing Mellanox NEO

The default Mellanox NEO™ installation directory is /opt/neo.

To install Mellanox NEO™ software:

Copy the Mellanox NEO™ installation package to a local temporary directory (example: /tmp).

Enter the temporary directory.

cd /tmp

Extract the Mellanox NEO™ installation package.

tar zxvf neo-2.4.0-x.el7.tar.gz

Enter the newly created directory.

cd neo

Install Mellanox NEO™.

./neo-installer.sh

In case a previous Mellanox NEO™ installation is detected, you will be asked to confirm proceeding with the upgrade procedure. Type “y” to proceed. See Upgrading Mellanox NEO below for more information.

[Optional] In order to use more provisioning templates of Mellanox NEO™ supported system types (Linux hosts, Windows hosts, Arista switches and Cisco switches), you may download and install Mellanox NEO™ external RPMs. For further details on how to download and install Mellanox NEO™ external RPMs, please refer to the community post "HowTo Install NEO Plugins".

You can download and install the external RPMs also after Mellanox NEO™ is up and running.

[Optional] Run Mellanox NEO™ manually after the installation is completed.

/opt/neo/neoservice start

During the installation process, a warning message is displayed if NTP is not configured. To resolve it, install ntp and run the ntpd process.
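On RedHat/CentOS 7.x this can typically be resolved as follows (a minimal sketch assuming the distribution's standard ntp package):

yum install -y ntp
systemctl enable ntpd
systemctl start ntpd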

Installing NEO for High Availability

NEO High Availability (HA) deployment is composed of a three-node cluster (based on CentOS 7.x) installed with NEO software. The high availability mechanism for the NEO application is based on two standard Linux mechanisms:

Pacemaker cluster resource manager – responsible for detection and recovery of machine and application-level failures.

Rsync – responsible for synchronizing all file systems between the three cluster nodes.

Prerequisites

CentOS v7.x installed on your machine (High Availability is supported on CentOS 7.x only).

Configure ssh trust between the three nodes. Run on each of the three nodes: 

ssh-keygen
ssh-copy-id user@host

Installing Mellanox NEO Cluster

Install NEO separately on each node. For further information, refer to Installing Mellanox NEO section above.

Configuring Mellanox NEO Cluster

The following steps are performed on one node only but will automatically apply to the other two nodes once the cluster is started.

Choose one node and update the parameters in its yaml file, located at: /opt/neo/common/conf/ha.yaml:

[Optional] Hacluster_password – this parameter is set by default to use a pre-configured password. To change the password, please contact Mellanox Support.

[Optional] ha_file_sync – the periodic time for syncing the persistent data. The default value is 300 seconds; the minimum value is 100 seconds.

ha_nodes – the IP addresses of the three nodes on which NEO is installed, in addition to their priority:

local_ip – the IP of the node that is part of the HA cluster

priority – either 1, 2 or 3, according to the node's mode (active/stand-by). Node priority is only considered on the first NEO startup.

virtualIP – the virtual IP address for the web UI. This IP is the gateway for all nodes.
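A hypothetical sketch of how these parameters might look in ha.yaml (the IP addresses below are placeholders, and the exact layout of the installed /opt/neo/common/conf/ha.yaml takes precedence):

ha_file_sync: 300
virtualIP: 192.168.1.100
ha_nodes:
  - local_ip: 192.168.1.101
    priority: 1
  - local_ip: 192.168.1.102
    priority: 2
  - local_ip: 192.168.1.103
    priority: 3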

Once configuration is completed successfully, make sure to start NEO and check its status as described in the section below.

Operating Mellanox NEO Cluster

A Mellanox NEO user can start, stop, or restart the Mellanox NEO cluster, or check its status, at any time.

To start Mellanox NEO cluster, run: 

/opt/neo/neocluster start

To check Mellanox NEO cluster status, run: 

/opt/neo/neocluster status

To stop Mellanox NEO cluster, run: 

/opt/neo/neocluster stop

To restart Mellanox NEO cluster, run: 

/opt/neo/neocluster restart

Deploying NEO Virtual Appliances

The NEO application provides several Virtual Appliances for selected hypervisors for easier deployment. Before deploying the NEO VM on Windows Server 2016, make sure to disable the following security settings so you can access the UI from the host machine:

Click the Start button and launch the Server Manager.

Click on “Local Server”.

In the “Properties” window, check whether “IE Enhanced Security Configuration” is set to “On”. Turn off the IE ESC for Administrators and/or for Users, and click OK.

Restart the browser, and attempt logging in.

Deploying NEO Virtual Appliance on Linux KVM

Go to the VM host (hypervisor) storage directory:

cd /images

Copy your release image to the VM host:

cp /release/vm/neo-1.4.9-10.qcow2 .

Run: 

virt-manager &

Create a new Virtual Machine (VM).

Choose “Import existing disk image” for installing the OS:

Provide the storage path, and choose the operating system as Linux Red Hat 7.3 or above:

Specify the memory usage and the number of CPUs:

The higher the memory and CPU allocation, the better the performance. Memory should be at least 8192 MB.

Enter a name for the VM. If you wish to configure the NIC card, select “Customize configuration before install”:

If you wish to set a fixed MAC address, do so in the NIC section of the VM configuration.

Once the new VM is created successfully, a screen with the hostname and login username will appear.

Log into the VM using the following credentials:

Username: root
Password: 123456

Stop the NEO service. Run: 

cd /opt/neo
./neoservice stop

Verify the date and time zone are configured properly: 

date

If you need to update the time zone, follow the steps below: 

Delete the current “localtime” file under /etc/ directory. Run: 

cd /etc

Remove the local time. Run: 

rm localtime

Select a time zone:  

ln -s /usr/share/zoneinfo/US/Pacific localtime
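Alternatively, on RedHat/CentOS 7.x the same change can usually be made with timedatectl (not part of the original procedure):

timedatectl set-timezone US/Pacific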

Check the hostname resolution: 

hostname -i

Make sure you received your local IP.
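If the command returns a loopback address (127.0.0.1) instead of the local IP, one common fix is to map the hostname to the VM's real IP in /etc/hosts (the IP below is a placeholder):

echo "192.168.1.20   $(hostname)" >> /etc/hosts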

Start NEO. Run:

cd /opt/neo
./neoservice start

Make sure you can access the VM through your browser.

Deploying NEO Virtual Appliance on VirtualBox

The NEO VM uses a 64-bit architecture. If only 32-bit operating system versions are offered, virtualization might not be enabled on your machine, and an error message about unavailable hardware acceleration will appear. In this case, make sure to enable virtualization in the BIOS.

To deploy the NEO Virtual Appliance on VirtualBox, follow the steps below:

Click “File” and choose “Import Appliance”.

Choose the path for the ovf file in the VM files and click Next.

Click “Import” to import the VM into VirtualBox. After this step, the VM will be imported and ready to explore NEO on it.

Choose the VM and click “Start” to run it.

Once the VM starts, log in using the following credentials:

Username: root
Password: 123456

Run ifconfig to display the interfaces. As can be seen below, eth0 has already acquired an IP on the network:

The MAC address that is assigned to the VM must be in the DHCP server's records in order for the VM to get an IP address.

Log in to the NEO GUI using the IP found in the previous step (http://<VM IP>/neo) with the following credentials:

Username: admin
Password: 123456

Deploying NEO Virtual Appliance on VMware Workstation

Click “File” --> “Open” and open the ovf template.

Click “Import” to start the NEO VM import process.

The VM can then be seen imported:

Click “Power on this virtual machine” to start the VM. Use the following credentials:

Username: root
Password: 123456

Run ifconfig to display the interfaces. As can be seen below – eth0 already acquired an IP.  

The MAC address that is assigned to the VM must be in the DHCP server's records in order for the VM to get an IP address.

Log in to the NEO GUI using the IP found in the previous step (http://<VM IP>/neo) with the following credentials:

Username: admin
Password: 123456

If the VM does not succeed in acquiring an IP, check the “Automatic Settings” under Edit --> Virtual Network Editor. Make sure to untick the VirtualBox checkbox (if VirtualBox is installed on your machine), and then reboot the VM so it can acquire an IP.

Deploying NEO Virtual Appliance on VMware ESXi Server

Connect to the ESXi machine using vSphere Client.

Click “File” and choose “Deploy OVF Template…”.

Choose the path for the OVF template and go through the pages by clicking “Next”.

Click “Finish” to start deploying.

Right-click on the VM, choose “Open Console”, and power on the machine.

Use the following credentials to log in to the machine:

Username: root
Password: 123456

Run ifconfig to display interfaces. As can be seen below – the VM has already acquired an IP.

The MAC address that is assigned to the VM must be in the DHCP server's records in order for the VM to get an IP address.

Log in to the NEO GUI using the IP found in the previous step (http://<VM IP>/neo) with the following credentials:

Username: admin
Password: 123456

Installing NEO Virtual Appliance on Hyper-V

Launch the Hyper-V Manager.

Click “Action” --> “Virtual Switch Manager”.

Create a new external virtual switch:

Provide a name and make sure you choose the right network adapter connected to the management network:

Click “Action” --> New --> “Virtual Machine”.

Click “Next” in the New Virtual Machine Wizard window. 

Specify the neo-vm name in the Specify Name and Location menu.

Click “Next”.

Choose the desired generation in the Specify Generation menu.

Click “Next”.

Set the memory size to 8192MB minimum in the Assign Memory menu.

Click “Next”.

Use the virtual switch that appears in the Connection drop-down menu in the Configure Network menu.

Click “Next”.

Choose “Use an existing virtual hard disk” and browse to the neo-v vhd file in the Connect Virtual Hard Disk menu.

Click “Next”.

Click “Finish” in the Summary menu when displayed.

Right-click and choose Connect once you see the neo-vm on your Hyper-V.

Select Start from the Action menu to start the VM.

Use the following credentials to log in to your VM:

Username: root
Password: 123456

Installing NEO as a Docker Container

Install Docker CE on CentOS 7.X: 

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum-config-manager --enable docker-ce-edge
yum-config-manager --enable docker-ce-testing
yum makecache fast
yum -y install --setopt=obsoletes=0 docker-ce-17.03.2.ce-1.el7.centos.x86_64 docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch
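Optionally, confirm that the pinned Docker version was installed before continuing (a quick sanity check):

docker --version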

Run the container: 

cp <NEO Docker image> /tmp
service docker start
gzip -d <NEO Docker image>.gz
docker load -i /tmp/<NEO Docker image>
docker images    (to get the image ID)
docker run -dit --network host -v /dev/log:/dev/log --privileged <image ID> /usr/sbin/init

Configure the httpd host:

On the host (not inside the container), create the following file: /etc/httpd/conf.d/neo.conf with this content:

<Location /neo>
ProxyPass http://127.0.0.1:3080/neo
ProxyPassReverse http://127.0.0.1:3080/neo
</Location>

Run:

service httpd restart
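Note that these directives require mod_proxy and mod_proxy_http to be loaded in httpd; on a standard httpd installation you can verify this with the following optional check:

httpd -M 2>/dev/null | grep -E 'proxy_module|proxy_http_module'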

Find the container ID by running the following on the host (not inside the container):

docker ps

Make sure NEO is not running on the Linux host machine before starting the NEO on the container.

Start NEO on the container: 

docker exec -it <container ID> /bin/bash
cd /opt/neo
./neoservice start

If the host machine is rebooted, the running container instance will disappear and a new instance should be run.
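If you prefer the container to survive host reboots, one possible alternative (not part of the original procedure) is to start it with a Docker restart policy, for example:

docker run -dit --restart unless-stopped --network host -v /dev/log:/dev/log --privileged <image ID> /usr/sbin/init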

Installing What Just Happened Using NEO on an Onyx Switch

NEO is the Mellanox monitoring and intent-based networking product. NEO gives the user an out-of-the-box, built-in WJH dashboard to view current and historical WJH data from the managed Mellanox Spectrum® switches. To do so, NEO relies on InfluxDB and Switch Telemetry Agents on the switches as part of the solution. The telemetry data can be visualized and queried using either NEO or any available FOSS visualization tools. To get the telemetry data into the database of choice, a Switch Telemetry Agent is used to pull, parse, apply logic, and stream the data out of the Mellanox Spectrum® switch.

To enable WJH using NEO, the Telemetry Agent must be installed in a docker container on the switch.

WJH is supported either through the CLI with the Web UI or using NEO, but not in parallel.

First, the switch must be added to NEO. To do that, go to the “Managed Elements” tab and click the “+Add” button.

Fill in the “Device IP” and “System Type”, press “+”, and click the “Add Devices” button.

Install the Telemetry Agent as a container on the switch. To do that, right-click on the switch in the “Managed Elements” page, select “NEO Telemetry Agent”, and then click “Install”:

The Telemetry Agent installation will enable Docker on the switch automatically.

Wait until the Telemetry Agent is loaded on the switch.

WJH installation is completed.

Enable the WJH session using one of the following options:

Go to the “Dashboard” page and select “What Just Happened”. Press “Enable”.

or:

Go to "Telemetry", "Streaming" page.Right click in "NEO WJH" and choose "Enable" it.

WJH is now installed and enabled.

To see WJH in action, go to the “Dashboard” page and select “What Just Happened”:

For further information, see section What Just Happened?. 

Upgrading Mellanox NEO

In order to upgrade the Mellanox NEO™ software, follow the steps below:

Stop the Mellanox NEO™ services:

/opt/neo/neoservice stop

Copy the Mellanox NEO™ installation package to a local temporary directory (for example: /tmp).

Enter the temporary directory: 

cd /tmp

Extract the Mellanox NEO™ installation package:

tar zxvf neo-2.4.0-5.el6.tar.gz

Enter the newly created directory:

cd neo

Install Mellanox NEO™:

./neo-installer.sh

In case a previous Mellanox NEO™ installation is detected, you will be asked to confirm proceeding with the upgrade. Type “y” to proceed.

Note: If there is a conflict between the currently installed RPMs and the new RPMs that NEO needs to install, you might be asked to confirm proceeding with the upgrade process twice.

This will only occur when upgrading from NEO v1.5. Before typing 'y', make sure the RPMs do not have any dependencies that are not related to NEO.

[Optional] Run Mellanox NEO™ manually after the installation is completed.

/opt/neo/neoservice start

 [Optional] In order to use more provisioning templates of Mellanox NEO™ supported system types (Linux hosts, Windows hosts, Arista switches and Cisco switches), you may download and install Mellanox NEO™ external RPMs. For further details on how to download and install Mellanox NEO™ external RPMs, please refer to the community post "HowTo Install NEO Plugins".

You can download and install the external RPMs also after Mellanox NEO™ is up and running.

Uninstalling Mellanox NEO

To uninstall the Mellanox NEO software, please run the following command:

/opt/neo/neo-uninstaller.sh

Uninstalling NEO as a Docker Container

To uninstall NEO as a Docker Container, please run the following commands:

docker stop <container ID>
docker rm <container ID>


